
Conversation

@pyup-bot (Contributor) commented Jun 7, 2018

Update keras from 2.1.6 to 2.2.0.

Changelog

2.2.0

Areas of improvement

- New model definition API: `Model` subclassing.
- New input mode: ability to call models on TensorFlow tensors directly (TensorFlow backend only).
- Improve feature coverage of Keras with the Theano and CNTK backends.
- Bug fixes and performance improvements.
- Large refactors improving code structure, code health, and reducing test time. In particular:
* The Keras engine now follows a much more modular structure.
* The `Sequential` model is now a plain subclass of `Model`.
* The modules `applications` and `preprocessing` are now externalized to their own repositories ([keras-applications](https://github.com/keras-team/keras-applications) and [keras-preprocessing](https://github.com/keras-team/keras-preprocessing)).

API changes

- Add `Model` subclassing API (details below).
- Allow symbolic tensors to be fed to models, with TensorFlow backend (details below).
- Enable CNTK and Theano support for layers `SeparableConv1D`, `SeparableConv2D`, as well as backend methods `separable_conv1d` and `separable_conv2d` (previously only available for TensorFlow).
- Enable CNTK and Theano support for applications `Xception` and `MobileNet` (previously only available for TensorFlow).
- Add `MobileNetV2` application (available for all backends).
- Enable loading external (non built-in) backends by changing your `~/.keras/keras.json` configuration file (e.g. PlaidML backend).
- Add `sample_weight` in `ImageDataGenerator`.
- Add `preprocessing.image.save_img` utility to write images to disk.
- Default `Flatten` layer's `data_format` argument to `None` (which defaults to global Keras config).
- `Sequential` is now a plain subclass of `Model`. The attribute `sequential.model` is deprecated.
- Add `baseline` argument in `EarlyStopping` (stop training if a given baseline isn't reached).
- Add `data_format` argument to `Conv1D`.
- Make the model returned by `multi_gpu_model` serializable.
- Support input masking in `TimeDistributed` layer.
- Add an `advanced_activations` layer `ReLU`, making the ReLU activation easier to configure while retaining easy serialization capabilities (see the sketch after this list).
- Add `axis=-1` argument in backend crossentropy functions specifying the class prediction axis in the input tensor.
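A minimal sketch combining two of the additions above, the configurable `ReLU` layer and the `baseline` argument of `EarlyStopping` (layer sizes, thresholds, and data are illustrative):

```python
import keras
from keras.callbacks import EarlyStopping

# The new advanced-activation ReLU layer is configurable (here capped
# at 6.0, a "ReLU6"-style activation) yet serializes like any layer.
model = keras.models.Sequential([
    keras.layers.Dense(64, input_shape=(20,)),
    keras.layers.ReLU(max_value=6.0),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])

# EarlyStopping with `baseline`: stop training if val_acc has not
# reached 0.70 within the patience window.
early_stop = EarlyStopping(monitor='val_acc', baseline=0.70, patience=3)
# model.fit(x, y, validation_split=0.2, epochs=50, callbacks=[early_stop])
```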

New model definition API: `Model` subclassing

In addition to the `Sequential` API and the functional `Model` API, you may now define models by subclassing the `Model` class and writing your own `call` forward pass:

```python
import keras

class SimpleMLP(keras.Model):

    def __init__(self, use_bn=False, use_dp=False, num_classes=10):
        super(SimpleMLP, self).__init__(name='mlp')
        self.use_bn = use_bn
        self.use_dp = use_dp
        self.num_classes = num_classes

        self.dense1 = keras.layers.Dense(32, activation='relu')
        self.dense2 = keras.layers.Dense(num_classes, activation='softmax')
        if self.use_dp:
            self.dp = keras.layers.Dropout(0.5)
        if self.use_bn:
            self.bn = keras.layers.BatchNormalization(axis=-1)

    def call(self, inputs):
        x = self.dense1(inputs)
        if self.use_dp:
            x = self.dp(x)
        if self.use_bn:
            x = self.bn(x)
        return self.dense2(x)

model = SimpleMLP()
model.compile(...)
model.fit(...)
```


Layers are defined in `__init__(self, ...)`, and the forward pass is specified in `call(self, inputs)`. In `call`, you may specify custom losses by calling `self.add_loss(loss_tensor)` (like you would in a custom layer).
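For instance, a hypothetical variant of `SimpleMLP` could add an L2 activity penalty on its hidden features (a sketch; the class name and penalty weight are illustrative, not part of the release notes):

```python
import keras
from keras import backend as K

class PenalizedMLP(keras.Model):
    """Hypothetical subclassed model adding a custom loss in `call`."""

    def __init__(self, num_classes=10):
        super(PenalizedMLP, self).__init__(name='penalized_mlp')
        self.dense1 = keras.layers.Dense(32, activation='relu')
        self.dense2 = keras.layers.Dense(num_classes, activation='softmax')

    def call(self, inputs):
        x = self.dense1(inputs)
        # Custom loss term: an L2 activity penalty on the hidden
        # features, minimized on top of the compiled loss.
        self.add_loss(1e-4 * K.sum(K.square(x)))
        return self.dense2(x)
```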

New input mode: symbolic TensorFlow tensors

With Keras 2.2.0 and TensorFlow 1.8 or higher, you may `fit`, `evaluate` and `predict` using symbolic TensorFlow tensors (which are expected to yield data indefinitely). The API is similar to that of `fit_generator` and the other generator methods:

```python
iterator = training_dataset.make_one_shot_iterator()
x, y = iterator.get_next()

model.fit(x, y, steps_per_epoch=100, epochs=10)

iterator = validation_dataset.make_one_shot_iterator()
x, y = iterator.get_next()
model.evaluate(x, y, steps=50)
```


This is achieved by dynamically rewiring the TensorFlow graph to feed the input tensors to the existing model placeholders. There is no performance loss compared to building your model on top of the input tensors in the first place.
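The `training_dataset` above is assumed to be a `tf.data.Dataset` that repeats indefinitely; a minimal sketch of how such a dataset might be built (data and sizes are illustrative):

```python
import numpy as np
import tensorflow as tf

# Illustrative in-memory data.
x_train = np.random.random((1000, 20)).astype('float32')
y_train = np.random.randint(10, size=(1000, 1)).astype('int32')

# The dataset must repeat indefinitely, since `fit` will draw
# `steps_per_epoch * epochs` batches from it.
training_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
training_dataset = training_dataset.shuffle(1000).batch(32).repeat()
```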


Breaking changes

- Remove legacy `Merge` layers and associated functionality (remnant of Keras 0), which were deprecated in May 2016, with full removal initially scheduled for August 2017. Models from the Keras 0 API using these layers cannot be loaded with Keras 2.2.0 and above.
- The `truncated_normal` base initializer now returns values that are scaled by ~0.9 (resulting in correct variance value after truncation). This has a small chance of affecting initial convergence behavior on some models.


Credits

Thanks to our 46 contributors whose commits are featured in this release:

ASvyatkovskiy, AmirAlavi, Anirudh-Swaminathan, DavidAriel, Dref360, JonathanCMitchell, KuzMenachem, PeterChe1990, Saharkakavand, StefanoCappellini, ageron, askskro, bileschi, bonlime, bottydim, brge17, briannemsick, bzamecnik, christian-lanius, clemens-tolboom, dschwertfeger, dynamicwebpaige, farizrahman4u, fchollet, fuzzythecat, ghostplant, giuscri, huyu398, jnphilipp, masstomato, morenoh149, mrTsjolder, nittanycolonial, r-kellerm, reidjohnson, roatienza, sbebo, stevemurr, taehoonlee, tiferet, tkoivisto, tzerrell, vkk800, wangkechn, wouterdobbels, zwang36wang

2.1.6

Areas of improvement

- Bug fixes
- Documentation improvements
- Minor usability improvements

API changes

- In callback `ReduceLROnPlateau`, rename `epsilon` argument to `min_delta` (backwards-compatible; see the sketch after this list).
- In callback `RemoteMonitor`, add argument `send_as_json`.
- In backend `softmax` function, add argument `axis`.
- In `Flatten` layer, add argument `data_format`.
- In `save_model` (`Model.save`) and `load_model` functions, allow the `filepath` argument to be a `h5py.File` object.
- In `Model.evaluate_generator`, add `verbose` argument.
- In `Bidirectional` wrapper layer, add `constants` argument.
- In `multi_gpu_model` function, add arguments `cpu_merge` and `cpu_relocation` (controlling whether to force the template model's weights to be on CPU, and whether to operate merge operations on CPU or GPU).
- In `ImageDataGenerator`, allow argument `width_shift_range` to be `int` or 1D array-like.
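A short sketch of the renamed `ReduceLROnPlateau` argument (values are illustrative; the old `epsilon` name still works, per the note above):

```python
from keras.callbacks import ReduceLROnPlateau

# Changes in val_loss smaller than `min_delta` (formerly `epsilon`)
# do not count as improvement.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                              patience=5, min_delta=1e-4)
# model.fit(x, y, validation_split=0.2, callbacks=[reduce_lr])
```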

Breaking changes

This release does not include any known breaking changes.

Credits

Thanks to our 37 contributors whose commits are featured in this release:

Dref360, FirefoxMetzger, Naereen, NiharG15, StefanoCappellini, WindQAQ, dmadeka, edrogers, eltronix, farizrahman4u, fchollet, gabrieldemarmiesse, ghostplant, jedrekfulara, jlherren, joeyearsley, johanahlqvist, johnyf, jsaporta, kalkun, lucasdavid, masstomato, mrlzla, myutwo150, nisargjhaveri, obi1kenobi, olegantonyan, ozabluda, pasky, planck35, sotlampr, souptc, srjoglekar246, stamate, taehoonlee, vkk800, xuhdev

2.1.5

Areas of improvement

- Bug fixes.
- New APIs: sequence generation API `TimeseriesGenerator`, and new layer `DepthwiseConv2D`.
- Unit tests / CI improvements.
- Documentation improvements.

API changes

- Add new sequence generation API `keras.preprocessing.sequence.TimeseriesGenerator`.
- Add new convolutional layer `keras.layers.DepthwiseConv2D`.
- Allow weights from `keras.layers.CuDNNLSTM` to be loaded into a `keras.layers.LSTM` layer (e.g. for inference on CPU).
- Add `brightness_range` data augmentation argument in `keras.preprocessing.image.ImageDataGenerator`.
- Add `validation_split` API in `keras.preprocessing.image.ImageDataGenerator`. You can pass `validation_split` to the constructor (float), then select between training/validation subsets by passing the argument `subset='validation'` or `subset='training'` to methods `flow` and `flow_from_directory`.
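A minimal sketch of the `validation_split` workflow described above (the directory and image sizes are illustrative):

```python
from keras.preprocessing.image import ImageDataGenerator

# Reserve 20% of the images for validation at construction time.
datagen = ImageDataGenerator(rescale=1. / 255, validation_split=0.2)

# Select the subset when creating each generator.
train_gen = datagen.flow_from_directory('data/images', target_size=(150, 150),
                                        batch_size=32, subset='training')
val_gen = datagen.flow_from_directory('data/images', target_size=(150, 150),
                                      batch_size=32, subset='validation')
```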

Breaking changes

- As a side effect of a refactor of `ConvLSTM2D` to a modular implementation, recurrent dropout support in Theano has been dropped for this layer.

Credits

Thanks to our 28 contributors whose commits are featured in this release:

DomHudson, Dref360, VitamintK, abrad1212, ahundt, bojone, brainnoise, bzamecnik, caisq, cbensimon, davinnovation, farizrahman4u, fchollet, gabrieldemarmiesse, khosravipasha, ksindi, lenjoy, masstomato, mewwts, ozabluda, paulpister, sandpiturtle, saralajew, srjoglekar246, stefangeneralao, taehoonlee, tiangolo, treszkai

2.1.4

Areas of improvement

- Bug fixes
- Performance improvements
- Improvements to example scripts

API changes

- Allow for stateful metrics in `model.compile(..., metrics=[...])`. A stateful metric inherits from `Layer`, and implements `__call__` and `reset_states`.
- Support `constants` argument in `StackedRNNCells`.
- Enable some TensorBoard features in the `TensorBoard` callback (loss and metrics plotting) with non-TensorFlow backends.
- Add `reshape` argument in `model.load_weights()`, to optionally reshape weights being loaded to the size of the target weights in the model considered.
- Add `tif` to supported formats in `ImageDataGenerator`.
- Allow auto-GPU selection in `multi_gpu_model()` (set `gpus=None`).
- In `LearningRateScheduler` callback, the scheduling function now takes an argument: `lr`, the current learning rate.
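A sketch of the updated `LearningRateScheduler` contract described in the last item above (the decay factor and epoch threshold are illustrative):

```python
from keras.callbacks import LearningRateScheduler

# The schedule function now also receives the current learning rate,
# so schedules can be expressed relative to it.
def schedule(epoch, lr):
    return lr * 0.95 if epoch >= 10 else lr

scheduler = LearningRateScheduler(schedule)
# model.fit(x, y, epochs=50, callbacks=[scheduler])
```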

Breaking changes

- In `ImageDataGenerator`, change default interpolation of image transforms from nearest to bilinear. This should probably not break any users, but it is a change of behavior.

Credits

Thanks to our 37 contributors whose commits are featured in this release:

DalilaSal, Dref360, GalaxyDream, GarrisonJ, Max-Pol, May4m, MiliasV, MrMYHuang, N-McA, Vijayabhaskar96, abrad1212, ahundt, angeloskath, bbabenko, bojone, brainnoise, bzamecnik, caisq, cclauss, dsadulla, fchollet, gabrieldemarmiesse, ghostplant, gorogoroyasu, icyblade, kapsl, kevinbache, mendesmiguel, mikesol, myutwo150, ozabluda, sadreamer, simra, taehoonlee, veniversum, yongtang, zhangwj618

2.1.3

Areas of improvement

- Performance improvements (esp. convnets with TensorFlow backend).
- Usability improvements.
- Docs & docstrings improvements.
- New models in the `applications` module.
- Bug fixes.

API changes

- `trainable` attribute in `BatchNormalization` now disables the updates of the batch statistics (i.e. if `trainable == False` the layer will now run 100% in inference mode).
- Add `amsgrad` argument in `Adam` optimizer.
- Add new applications: `NASNetMobile`, `NASNetLarge`, `DenseNet121`, `DenseNet169`, `DenseNet201`.
- Add `Softmax` layer (removing the need to use a `Lambda` layer in order to specify the `axis` argument; see the sketch after this list).
- Add `SeparableConv1D` layer.
- In `preprocessing.image.ImageDataGenerator`, allow `width_shift_range` and `height_shift_range` to take integer values (absolute number of pixels).
- Support `return_state` in `Bidirectional` applied to RNNs (`return_state` should be set on the child layer).
- The string values `"crossentropy"` and `"ce"` are now allowed in the `metrics` argument (in `model.compile()`), and are routed to either `categorical_crossentropy` or `binary_crossentropy` as needed.
- Allow `steps` argument in `predict_*` methods on the `Sequential` model.
- Add `oov_token` argument in `preprocessing.text.Tokenizer`.
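A sketch of the new `Softmax` layer, which replaces the previous `Lambda`-based workaround for specifying the axis (shapes are illustrative):

```python
from keras.layers import Input, Dense, Softmax
from keras.models import Model

# Softmax over the last axis of a 3D tensor, previously only
# expressible with a Lambda layer.
inputs = Input(shape=(8, 10))
x = Dense(10)(inputs)
outputs = Softmax(axis=-1)(x)
model = Model(inputs, outputs)
```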

Breaking changes

- In `preprocessing.image.ImageDataGenerator`, `shear_range` has been switched to use degrees rather than radians (for consistency). This should not actually break anything (neither training nor inference), but keep this change in mind in case you see any issues with regard to your image data augmentation process.


Credits

Thanks to our 45 contributors whose commits are featured in this release:

Dref360, OliPhilip, TimZaman, bbabenko, bdwyer2, berkatmaca, caisq, decrispell, dmaniry, fchollet, fgaim, gabrieldemarmiesse, gklambauer, hgaiser, hlnull, icyblade, jgrnt, kashif, kouml, lutzroeder, m-mohsen, mab4058, manashty, masstomato, mihirparadkar, myutwo150, nickbabcock, novotnj3, obsproth, ozabluda, philferriere, piperchester, pstjohn, roatienza, souptc, spiros, srs70187, sumitgouthaman, taehoonlee, tigerneil, titu1994, tobycheese, vitaly-krumins, yang-zhang, ziky90

2.1.2

Areas of improvement

- Bug fixes and performance improvements.
- API improvements in Keras applications, generator methods.

API changes

- Make `preprocess_input` in all Keras applications compatible with both Numpy arrays and symbolic tensors (previously only supported Numpy arrays).
- Allow the `weights` argument in all Keras applications to accept the path to a custom weights file to load (previously only the built-in `imagenet` weights file was supported; see the sketch after this list).
- `steps_per_epoch` behavior change in generator training/evaluation methods:
 - If specified, the specified value will be used (previously, for a generator of type `Sequence`, the specified value was overridden by the `Sequence` length).
 - If unspecified and if the generator passed is a `Sequence`, we set it to the `Sequence` length.
- Allow `workers=0` in generator training/evaluation methods (will run the generator in the main process, in a blocking way).
- Add `interpolation` argument in `ImageDataGenerator.flow_from_directory`, allowing a custom interpolation method for image resizing.
- Allow `gpus` argument in `multi_gpu_model` to be a list of specific GPU ids.
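A sketch of the extended `weights` argument (the file path is illustrative):

```python
from keras.applications.vgg16 import VGG16

# `weights` now accepts a path to a custom weights file in addition
# to the built-in 'imagenet' option.
model = VGG16(weights='my_vgg16_weights.h5')  # illustrative path
```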

Breaking changes

- The change in `steps_per_epoch` behavior (described above) may affect some users.

Credits

Thanks to our 26 contributors whose commits are featured in this release:

Alex1729, alsrgv, apisarek, asos-saul, athundt, cherryunix, dansbecker, datumbox, de-vri-es, drauh, evhub, fchollet, heath730, hgaiser, icyblade, jjallaire, knaveofdiamonds, lance6716, luoch, mjacquem1, myutwo150, ozabluda, raviksharma, rh314, yang-zhang, zach-nervana

2.1.1

This release amends release 2.1.0 to include a fix for an erroneous breaking change introduced in #8419.

2.1.0

This is a small release that fixes outstanding bugs that were reported since the previous release.

Areas of improvement

- Bug fixes (in particular, Keras no longer allocates devices at startup time with the TensorFlow backend. This was causing issues with Horovod.)
- Documentation and docstring improvements.
- Better CIFAR10 ResNet example script and improvements to example scripts code style.

API changes

- Add `go_backwards` to cuDNN RNNs (enables `Bidirectional` wrapper on cuDNN RNNs).
- Add ability to pass `fetches` to `K.Function()` with the TensorFlow backend.
- Add `steps_per_epoch` and `validation_steps` arguments in `Sequential.fit()` (to sync it with `Model.fit()`).

Breaking changes

None.

Credits

Thanks to our 14 contributors whose commits are featured in this release:

Dref360, LawnboyMax, anj-s, bzamecnik, datumbox, diogoff, farizrahman4u, fchollet, frexvahi, jjallaire, nsuh, ozabluda, roatienza, yakigac

2.0.9

Areas of improvement

- RNN improvements:
 - Refactor RNN layers to rely on atomic RNN cells. This makes the creation of custom RNNs very simple and user-friendly, via the `RNN` base class (see the sketch after this list).
 - Add ability to create new RNN cells by stacking a list of cells, allowing for efficient stacked RNNs.
- Add `CuDNNLSTM` and `CuDNNGRU` layers, backed by NVIDIA's cuDNN library for fast GPU training & inference.
- Add RNN Sequence-to-sequence example script.
- Add `constants` argument in `RNN`'s `call` method, making RNN attention easier to implement.
- Easier multi-GPU data parallelism via `keras.utils.multi_gpu_model`.
- Bug fixes & performance improvements (in particular, native support for NCHW data layout in TensorFlow).
- Documentation improvements and examples improvements.
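A minimal custom-cell sketch in the spirit of the refactor mentioned above (a bare-bones cell wrapped by the `RNN` base class; the cell logic is illustrative):

```python
import keras
from keras import backend as K

class MinimalRNNCell(keras.layers.Layer):
    """A bare-bones cell: output = tanh(x . W + h . U)."""

    def __init__(self, units, **kwargs):
        self.units = units
        self.state_size = units  # required attribute for RNN cells
        super(MinimalRNNCell, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                      initializer='uniform', name='kernel')
        self.recurrent_kernel = self.add_weight(
            shape=(self.units, self.units),
            initializer='uniform', name='recurrent_kernel')
        self.built = True

    def call(self, inputs, states):
        prev_output = states[0]
        output = K.tanh(K.dot(inputs, self.kernel) +
                        K.dot(prev_output, self.recurrent_kernel))
        return output, [output]

# Wrap the cell to obtain a full recurrent layer.
layer = keras.layers.RNN(MinimalRNNCell(32))
```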



API changes

- Add "fashion mnist" dataset as `keras.datasets.fashion_mnist.load_data()`
- Add `Minimum` merge layer as `keras.layers.Minimum` (class) and `keras.layers.minimum(inputs)` (function)
- Add `InceptionResNetV2` to `keras.applications`.
- Support `bool` variables in TensorFlow backend.
- Add `dilation` to `SeparableConv2D`.
- Add support for dynamic `noise_shape` in `Dropout`.
- Add `keras.layers.RNN()` base class for batch-level RNNs (used to implement custom RNN layers from a cell class).
- Add `keras.layers.StackedRNNCells()` layer wrapper, used to stack a list of RNN cells into a single cell.
- Add `CuDNNLSTM` and `CuDNNGRU` layers.
- Deprecate `implementation=0` for RNN layers.
- The Keras progbar now reports time taken for each past epoch, and average time per step.
- Add option to specify the resampling method in `keras.preprocessing.image.load_img()`.
- Add `keras.utils.multi_gpu_model` for easy multi-GPU data parallelism.
- Add `constants` argument in `RNN`'s `call` method, used to pass a list of constant tensors to the underlying RNN cell.

Breaking changes

- Implementation change in `keras.losses.cosine_proximity` results in a different (correct) scaling behavior.
- Implementation change for samplewise normalization in `ImageDataGenerator` results in a different normalization behavior.

Credits

Thanks to our 59 contributors whose commits are featured in this release!

Alok, Danielhiversen, Dref360, HelgeS, JakeBecker, MPiecuch, MartinXPN, RitwikGupta, TimZaman, adammenges, aeftimia, ahojnnes, akshaychawla, alanyee, aldenks, andhus, apbard, aronj, bangbangbear, bchu, bdwyer2, bzamecnik, cclauss, colllin, datumbox, deltheil, dhaval067, durana, ericwu09, facaiy, farizrahman4u, fchollet, flomlo, fran6co, grzesir, hgaiser, icyblade, jsaporta, julienr, jussihuotari, kashif, lucashu1, mangerlahn, myutwo150, nicolewhite, noahstier, nzw0301, olalonde, ozabluda, patrikerdes, podhrmic, qin, raelg, roatienza, shadiakiki1986, smgt, souptc, taehoonlee, y0z

2.0.8

The primary purpose of this release is to address an incompatibility between Keras 2.0.7 and the next version of TensorFlow (1.4). TensorFlow 1.4 is not due for a while, but the sooner the PyPI release includes the fix, the fewer people will be affected when upgrading to the next TensorFlow version once it is released.

No API changes for this release. A few bug fixes.

2.0.7

Areas of improvement

- Bug fixes.
- Performance improvements.
- Documentation improvements.
- Better support for training models from data tensors in TensorFlow (e.g. Datasets, TFRecords). Add a related example script.
- Improve TensorBoard UX with better grouping of ops into name scopes.
- Improve test coverage.

API changes

- Add `clone_model` function, enabling the construction of a new model given an existing model to use as a template. Works even in a TensorFlow graph different from that of the original model (see the sketch after this list).
- Add `target_tensors` argument in `compile`, enabling the use of custom tensors or placeholders as model targets.
- Add `steps_per_epoch` argument in `fit`, enabling training a model from data tensors in a way that is consistent with training from Numpy arrays.
- Similarly, add `steps` argument in `predict` and `evaluate`.
- Add `Subtract` merge layer, and associated layer function `subtract`.
- Add `weighted_metrics` argument in `compile` to specify metric functions meant to take into account `sample_weight` or `class_weight`.
- Make the `stop_gradient` backend function consistent across backends.
- Allow dynamic shapes in `repeat_elements` backend function.
- Enable stateful RNNs with CNTK.
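A sketch of `clone_model` usage (the template model is illustrative; weight copying is shown as an optional extra step):

```python
from keras.layers import Dense
from keras.models import Sequential, clone_model

# An illustrative template model.
model = Sequential([Dense(10, activation='relu', input_shape=(4,)),
                    Dense(1)])

# Rebuild the same architecture with freshly initialized weights,
# then optionally copy the original weights over.
new_model = clone_model(model)
new_model.set_weights(model.get_weights())
```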

Breaking changes

- The backend methods `categorical_crossentropy`, `sparse_categorical_crossentropy`, `binary_crossentropy` had the order of their positional arguments (`y_true`, `y_pred`) inverted. This change does not affect the `losses` API. This change was done to achieve API consistency between the `losses` API and the backend API.
- Move constraint management to be based on variable attributes. Remove the now-unused `constraints` attribute on layers and models (not expected to affect any user).

Credits

Thanks to our 47 contributors whose commits are featured in this release!

5ke, Alok, Danielhiversen, Dref360, NeilRon, abnera, acburigo, airalcorn2, angeloskath, athundt, brettkoonce, cclauss, denfromufa, enkait, erg, ericwu09, farizrahman4u, fchollet, georgwiese, ghisvail, gokceneraslan, hgaiser, inexxt, joeyearsley, jorgecarleitao, kennyjacob, keunwoochoi, krizp, lukedeo, milani, n17r4m, nicolewhite, nigeljyng, nyghtowl, nzw0301, rapatel0, souptc, srinivasreddy, staticfloat, taehoonlee, td2014, titu1994, tleeuwenburg, udibr, waleedka, wassname, yashk2810

2.0.6

Areas of improvement

- Improve generator methods (`predict_generator`, `fit_generator`, `evaluate_generator`) and add data enqueuing utilities.
- Bug fixes and performance improvements.
- New features: new `Conv3DTranspose` layer, new `MobileNet` application, self-normalizing networks.

API changes

- Self-normalizing networks: add `selu` activation function, `AlphaDropout` layer, `lecun_normal` initializer.
- Data enqueuing: add `Sequence`, `SequenceEnqueuer`, `GeneratorEnqueuer` to `utils` (see the sketch after this list).
- Generator methods: rename arguments `pickle_safe` (replaced with `use_multiprocessing`) and `max_q_size` (replaced with `max_queue_size`).
- Add `MobileNet` to the applications module.
- Add `Conv3DTranspose` layer.
- Allow custom print functions for model's `summary` method (argument `print_fn`).
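A minimal `Sequence` sketch (the class is hypothetical; `Sequence`-based inputs are safe with `use_multiprocessing=True`, unlike plain Python generators):

```python
import numpy as np
from keras.utils import Sequence

class ArrayBatchSequence(Sequence):
    """Hypothetical Sequence yielding fixed-size batches from arrays."""

    def __init__(self, x, y, batch_size=32):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        # Number of batches per epoch.
        return int(np.ceil(len(self.x) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[batch], self.y[batch]

# model.fit_generator(ArrayBatchSequence(x_train, y_train),
#                     use_multiprocessing=True, workers=4)
```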

2.0.5

- Add beta CNTK backend.
- TensorBoard improvements.
- Documentation improvements.
- Bug fixes and performance improvements.
- Improve style transfer example script.

API changes:

- Add `return_state` constructor argument to RNNs.
- Add `skip_compile` option to `load_model`.
- Add `categorical_hinge` loss function.
- Add `sparse_top_k_categorical_accuracy` metric.
- Add new options to `TensorBoard` callback.
- Add `TerminateOnNaN` callback.
- Generalize the `Embedding` layer to N (>=2) input dimensions.

2.0.4

- Documentation improvements.
- Docstring improvements.
- Update some examples scripts (in particular, new deep dream example).
- Bug fixes and performance improvements.

API changes:

- Add `logsumexp` and `identity` to backend.
- Add `logcosh` loss.
- New signature for `add_weight` in `Layer`.
- `get_initial_states` in `Recurrent` is now `get_initial_state`.

2.0.0

Keras 2 release notes

This document details changes, in particular API changes, occurring from Keras 1 to Keras 2.

Training

- The `nb_epoch` argument has been renamed `epochs` everywhere.
- The methods `fit_generator`, `evaluate_generator` and `predict_generator` now work by drawing a number of *batches* from a generator (number of training steps), rather than a number of samples.
 - `samples_per_epoch` was renamed `steps_per_epoch` in `fit_generator`.
 - `nb_val_samples` was renamed `validation_steps` in `fit_generator`.
 - `val_samples` was renamed `steps` in `evaluate_generator` and `predict_generator`.
- It is now possible to manually add a loss to a model by calling `model.add_loss(loss_tensor)`. The loss is added to the other losses of the model and minimized during training.
- It is also possible to *not* apply any loss to a specific model output. If you pass `None` as the `loss` argument for an output (e.g. in `compile`, `loss={'output_1': None, 'output_2': 'mse'}`), the model will expect no Numpy arrays to be fed for this output when using `fit`, `train_on_batch`, or `fit_generator`. The output values are still returned as usual when using `predict` (see the sketch after this list).
- In TensorFlow, models can now be trained using `fit` if some of their inputs (or even all) are TensorFlow queues or variables, rather than placeholders. See [this test](https://github.com/fchollet/keras/blob/master/tests/keras/engine/test_training.py#L252) for specific examples.
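A sketch of skipping the loss on one output (the model and output names are illustrative):

```python
from keras.layers import Dense, Input
from keras.models import Model

inputs = Input(shape=(16,))
out_1 = Dense(8, name='output_1')(inputs)   # no loss: not trained directly
out_2 = Dense(1, name='output_2')(inputs)
model = Model(inputs, [out_1, out_2])

# Only 'output_2' contributes to the training loss; `fit` expects
# targets for it alone, while `predict` still returns both outputs.
model.compile(optimizer='rmsprop',
              loss={'output_1': None, 'output_2': 'mse'})
```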


Losses & metrics

- The `objectives` module has been renamed `losses`.
- Several legacy metric functions have been removed, namely `matthews_correlation`, `precision`, `recall`, `fbeta_score`, `fmeasure`.
- Custom metric functions can no longer return a dict, they must return a single tensor.


Models

- Constructor arguments for `Model` have been renamed:
 - `input` -> `inputs`
 - `output` -> `outputs`
- The `Sequential` model no longer supports the `set_input` method.
- For any model saved with Keras 2.0 or higher, weights trained with backend X will be converted to work with backend Y without any manual conversion step.


Layers

Removals

Deprecated layers `MaxoutDense`, `Highway` and `TimeDistributedDense` have been removed.


Call method

- All layers that use the learning phase now support a `training` argument in `call` (Python boolean or symbolic tensor), allowing you to specify the learning phase on a layer-by-layer basis. E.g. by calling a `Dropout` instance as `dropout(inputs, training=True)` you obtain a layer that will always apply dropout, regardless of the current global learning phase. The `training` argument defaults to the global Keras learning phase everywhere.
- The `call` method of layers can now take arbitrary keyword arguments, e.g. you can define a custom layer with a call signature like `call(inputs, alpha=0.5)`, and then pass an `alpha` keyword argument when calling the layer (only with the functional API, naturally).
- `__call__` now makes use of TensorFlow `name_scope`, so that your TensorFlow graphs will look pretty and well-structured in TensorBoard.

All layers taking a legacy `dim_ordering` argument

`dim_ordering` has been renamed `data_format`. It now takes two values: `"channels_first"` (formerly `"th"`) and `"channels_last"` (formerly `"tf"`).

Dense layer

Changed interface:

- `output_dim` -> `units`
- `init` -> `kernel_initializer`
- added `bias_initializer` argument
- `W_regularizer` -> `kernel_regularizer`
- `b_regularizer` -> `bias_regularizer`
- `b_constraint` -> `bias_constraint`
- `bias` -> `use_bias`

Dropout, SpatialDropout*D, GaussianDropout

Changed interface:

- `p` -> `rate`

Embedding

Convolutional layers

- The `AtrousConvolution1D` and `AtrousConvolution2D` layers have been deprecated. Their functionality is instead supported via the `dilation_rate` argument in `Convolution1D` and `Convolution2D` layers.
- `Convolution*` layers are renamed `Conv*`.
- The `Deconvolution2D` layer is renamed `Conv2DTranspose`.
- The `Conv2DTranspose` layer no longer requires an `output_shape` argument, making its use much easier.

Interface changes common to all convolutional layers:

- `nb_filter` -> `filters`
- The separate kernel dimension arguments become a single tuple argument, `kernel_size`. E.g. a legacy call `Conv2D(10, 3, 3)` becomes `Conv2D(10, (3, 3))`.
- `kernel_size` can be set to an integer instead of a tuple, e.g. `Conv2D(10, 3)` is equivalent to `Conv2D(10, (3, 3))`.
- `subsample` -> `strides`. Can also be set to an integer.
- `border_mode` -> `padding`
- `init` -> `kernel_initializer`
- added `bias_initializer` argument
- `W_regularizer` -> `kernel_regularizer`
- `b_regularizer` -> `bias_regularizer`
- `b_constraint` -> `bias_constraint`
- `bias` -> `use_bias`
- `dim_ordering` -> `data_format`
- In the `SeparableConv2D` layers, `init` is split into `depthwise_initializer` and `pointwise_initializer`.
- Added `dilation_rate` argument in `Conv2D` and `Conv1D`.
- 1D convolution kernels are now saved as a 3D tensor (instead of 4D as before).
- 2D and 3D convolution kernels are now saved in format `spatial_dims + (input_depth, depth)`, even with `data_format="channels_first"`.


Pooling1D

- `pool_length` -> `pool_size`
- `stride` -> `strides`
- `border_mode` -> `padding`

Pooling2D, 3D

- `border_mode` -> `padding`
- `dim_ordering` -> `data_format`


ZeroPadding layers

The `padding` argument of the `ZeroPadding2D` and `ZeroPadding3D` layers must be a tuple of length 2 and 3 respectively. Each entry `i` specifies how much to pad spatial dimension `i`. If an entry is an integer, symmetric padding is applied; if it is a tuple of integers, asymmetric padding is applied.

Upsampling1D

- `length` -> `size`

BatchNormalization

The `mode` argument of `BatchNormalization` has been removed; BatchNorm now only supports mode 0 (use batch statistics for feature-wise normalization during training, and moving statistics for feature-wise normalization during testing).

- `beta_init` -> `beta_initializer`
- `gamma_init` -> `gamma_initializer`
- added arguments `center`, `scale` (booleans, whether to use a `beta` and `gamma` respectively)
- added arguments `moving_mean_initializer`, `moving_variance_initializer`
- added arguments `beta_regularizer`, `gamma_regularizer`
- added arguments `beta_constraint`, `gamma_constraint`
- attribute `running_mean` is renamed `moving_mean`
- attribute `running_std` is renamed `moving_variance` (it *is* in fact a variance with the current implementation).


ConvLSTM2D

Same changes as for convolutional layers and recurrent layers apply.

PReLU

- `init` -> `alpha_initializer`

GaussianNoise

- `sigma` -> `stddev`

Recurrent layers

- `output_dim` -> `units`
- `init` -> `kernel_initializer`
- `inner_init` -> `recurrent_initializer`
- added argument `bias_initializer`
- `W_regularizer` -> `kernel_regularizer`
- `b_regularizer` -> `bias_regularizer`
- added arguments `kernel_constraint`, `recurrent_constraint`, `bias_constraint`
- `dropout_W` -> `dropout`
- `dropout_U` -> `recurrent_dropout`
- `consume_less` -> `implementation`. String values have been replaced with integers: implementation 0 (default), 1 or 2.
- LSTM only: the argument `forget_bias_init` has been removed. Instead there is a boolean argument `unit_forget_bias`, defaulting to `True`.


Lambda

The `Lambda` layer now supports a `mask` argument.


Utilities

Utilities should now be imported from `keras.utils` rather than from specific submodules (e.g. no more `keras.utils.np_utils...`).


Backend

random_normal and truncated_normal

- `std` -> `stddev`

Misc

- In the backend, `set_image_ordering` and `image_ordering` are now `set_data_format` and `data_format`.
- Any argument (other than `nb_epoch`) prefixed with `nb_` has been renamed to be prefixed with `num_` instead. This affects two datasets and one preprocessing utility.

@DEKHTIARJonathan self-requested a review June 7, 2018 13:55
@DEKHTIARJonathan merged commit b5b3b22 into master Jun 7, 2018
@DEKHTIARJonathan deleted the pyup-scheduled-update-2018-06-07 branch June 7, 2018 14:28
luomai pushed a commit that referenced this pull request Nov 21, 2018 (#684)

* Pin keras to latest version 2.2.0

* Keras unpinned and set to range

* Conv Layers File Cleaning

* Bug Fix not saving activation fx in layers: BatchNorm and ElementWiseLambdaLayer

* 1.8.6rc4 released

* String Formating Fixed